05. Cross Platform Architecture
There are numerous ways to organize a multi-platform VR project in Unity, and there’s no clear-cut standard. To get you thinking about project architecture, here are some of the ways you can build your project to support multiple platforms:
- (how we did it in prior videos) Two separate CameraRigs, Input manager script attached to each hand, Input scripts are platform-specific, Disable the CameraRig you’re not building for
This approach was used because it is the most basic, letting us focus on learning the SDKs rather than on abstract hierarchies. However, it has some downsides. Keeping a separate script for each platform invites copy-pasted code and makes maintenance harder. Separate CameraRigs also make it awkward for external scripts in your scene to target the player, so you have to tread carefully when referencing the player rig from outside code. Finally, creating platform-specific builds limits the versatility of your application.
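As a minimal sketch of this first approach, here is roughly what a platform-specific input script attached directly to one hand of the Vive CameraRig might look like, using the legacy SteamVR plugin's `SteamVR_Controller` API (the class name and the comment's behavior are assumptions for illustration; the Oculus rig would need a parallel script built on `OVRInput`):

```csharp
using UnityEngine;

// Hypothetical per-hand input script for the Vive rig (approach 1).
// One copy of this lives on each hand controller GameObject.
public class ViveHandInput : MonoBehaviour
{
    private SteamVR_TrackedObject trackedObject;

    void Awake()
    {
        // The SteamVR plugin adds this component to each controller.
        trackedObject = GetComponent<SteamVR_TrackedObject>();
    }

    void Update()
    {
        var device = SteamVR_Controller.Input((int)trackedObject.index);
        if (device.GetPressDown(SteamVR_Controller.ButtonMask.Trigger))
        {
            // Handle the trigger press for this hand only.
        }
    }
}
```

Because each platform gets its own copy of this logic, any bug fix has to be mirrored in the other platform's script, which is exactly the maintenance cost described above.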
- Two separate CameraRigs, Input manager on top level of CameraRigs, Input scripts are platform-specific, Disable the CameraRig you’re not building for
The only difference here is that the InputManager lives on the top-level CameraRig object and takes the hand-controller GameObjects as references. On the plus side, this keeps all your input in one script while still letting you target each hand controller individually. On the downside, more code is required to achieve some of the things you can do natively with OnTriggerStay (like picking up multiple objects at once).
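A rough sketch of that second layout, with all names assumed for illustration: a single manager on the rig root holds Inspector references to both hands, and all polling funnels through one place.

```csharp
using UnityEngine;

// Hypothetical top-level input manager (approach 2). Attach to the
// CameraRig root and drag both hand GameObjects in via the Inspector.
public class ViveInputManager : MonoBehaviour
{
    public GameObject leftHand;   // assigned in the Inspector
    public GameObject rightHand;  // assigned in the Inspector

    void Update()
    {
        PollHand(leftHand);
        PollHand(rightHand);
    }

    void PollHand(GameObject hand)
    {
        // Platform-specific polling goes here; the Vive version of this
        // script would use SteamVR_Controller.Input(...), while the
        // Oculus version would use OVRInput, each on its own CameraRig.
    }
}
```

With this layout, an external script only ever needs a reference to the one manager, rather than to two per-hand scripts.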
- One CameraRig, Has all components from both CameraRigs, Dynamically enables the correct components at runtime or buildtime
Being able to ship a single executable for multiple platforms has its advantages. No matter where someone gets your app or which headset they’re using, they’ll be able to experience what you’ve created. By checking which headset is connected (doable through the UnityEngine.VR namespace), you can enable just the pieces of the CameraRig that the connected platform needs. The same check also works with separate camera rigs.
To detect which headset is being used at runtime and adjust your app accordingly, you can do something like:

using UnityEngine;
using UnityEngine.VR;

public class InsertScriptName : MonoBehaviour
{
    // …

    void Start()
    {
        // VRDevice.model returns a hardware string (e.g. "Vive MV" or
        // "Oculus Rift CV1"), so check for a substring rather than an
        // exact match.
        string model = VRDevice.model.ToLower();
        if (model.Contains("vive"))
        {
            // Enable ViveCameraRig or Vive components
        }
        else if (model.Contains("oculus"))
        {
            // Enable OculusCameraRig or Oculus components
        }
    }

    // …
}
Note: Notice that when we converted our menu navigation to the Oculus SDK, we were able to access the ObjectMenu in the same way without having to change that external script. Any changes we made to the code affecting the internal workings of the menu would apply regardless of platform. This is the power of abstracting input. Going one level deeper, we could create a delegate system: when a controller performs a certain action, say squeezing the trigger, we fire a TriggerPress action that functions in any script can subscribe to. So if you start out throwing Oranges but pick up several powerups that subscribe new functions to TriggerPress, you could soon be throwing fast sparkling Watermelons while summoning a Gecko in the middle of a pond, all from the same TriggerPress, changed dynamically at runtime.
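The delegate idea above can be sketched with plain C# events (all class and method names here are assumptions, not code from the videos): the platform-specific input script raises one TriggerPress event, and gameplay scripts subscribe or unsubscribe at runtime without ever knowing which headset fired it.

```csharp
using System;
using UnityEngine;

// Hypothetical event hub: platform input scripts call RaiseTriggerPress()
// when they detect the hardware trigger; gameplay code subscribes below.
public static class ControllerEvents
{
    public static event Action TriggerPress;

    public static void RaiseTriggerPress()
    {
        // Copy to a local before invoking so a null check is safe
        // even if the last subscriber unsubscribes concurrently.
        var handler = TriggerPress;
        if (handler != null)
        {
            handler();
        }
    }
}

public class OrangeThrower : MonoBehaviour
{
    void OnEnable()  { ControllerEvents.TriggerPress += ThrowOrange; }
    void OnDisable() { ControllerEvents.TriggerPress -= ThrowOrange; }

    void ThrowOrange()
    {
        // Spawn and launch an orange. A Watermelon powerup would simply
        // unsubscribe this handler and subscribe its own, changing what
        // TriggerPress does at runtime without touching the input code.
    }
}
```

The key design point is that only the event hub knows about input; everything downstream of TriggerPress is platform-agnostic.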